# Korean NLP
## Char Ko Bert Small
MrBananaHuman · Apache-2.0 · Large Language Model · Transformers · 19 downloads · 2 likes

A compact and efficient Transformer model designed specifically for Korean language processing, using syllable-level tokenization.
## Kogrammar Base
theSOL1 · MIT · Large Language Model · Transformers · Korean · 111 downloads · 3 likes

A Korean grammar correction model based on KoBART-v2, trained on the National Institute of Korean Language's spelling correction corpus.
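Because the model is based on KoBART-v2, a sequence-to-sequence architecture, it can be driven through the Transformers text2text-generation pipeline. A minimal sketch, assuming the Hub ID `theSOL1/kogrammar-base` (inferred from the author and model name above, not confirmed by the listing):

```python
# Hedged sketch: Korean grammar correction with a KoBART-based seq2seq model.
# The model ID "theSOL1/kogrammar-base" is an assumption, not confirmed by the listing.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="theSOL1/kogrammar-base")
result = corrector("어제 학교에 갓다가 친구를 만낫다.", max_length=64)  # misspelled input
print(result[0]["generated_text"])  # ideally the corrected sentence
```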
## Bert Fakenews
JKKANG · Text Classification · Transformers · 14 downloads · 1 like

A classification model for detecting the authenticity of Korean news, identifying false or misleading content.
## Korean Sentiment Analysis Kcelectra
nlp04 · MIT · Text Classification · Transformers · 6,616 downloads · 6 likes

A Korean sentiment analysis model fine-tuned from KcELECTRA-base-v2022, which achieves good results on its evaluation set.
## Kosimcse Bert Multitask
BM-K · Text Embedding · Transformers · Korean · 827 downloads · 8 likes

KoSimCSE-BERT-multitask is a BERT-based Korean sentence embedding model optimized with a multi-task learning strategy to produce high-quality sentence representations.
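A minimal sketch of computing sentence embeddings and a similarity score with this model, assuming the Hub ID `BM-K/KoSimCSE-bert-multitask` and that the `[CLS]` representation serves as the sentence embedding (both inferred, not confirmed by the listing):

```python
# Hedged sketch: Korean sentence similarity with KoSimCSE.
# Model ID "BM-K/KoSimCSE-bert-multitask" is assumed from the author/name above.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BM-K/KoSimCSE-bert-multitask")
model = AutoModel.from_pretrained("BM-K/KoSimCSE-bert-multitask")

sentences = ["치타가 들판을 가로질러 먹이를 쫓는다.", "치타가 초원에서 먹이를 쫓는다."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0]  # [CLS] vectors, assumed pooling

print(float(torch.cosine_similarity(embeddings[0], embeddings[1], dim=0)))
```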
## Ko Core News Lg
spacy · Sequence Labeling · Korean · 52 downloads · 2 likes

spaCy's large CPU-optimized Korean pipeline, covering tokenization, POS tagging, dependency parsing, named entity recognition, and more.
## Ko Core News Md
spacy · Sequence Labeling · Korean · 16 downloads · 0 likes

spaCy's medium CPU-optimized Korean pipeline, covering tokenization, POS tagging, dependency parsing, named entity recognition, and more.
## Ko Core News Sm
spacy · Sequence Labeling · Korean · 62 downloads · 1 like

spaCy's small CPU-optimized Korean pipeline, covering tokenization, POS tagging, dependency parsing, named entity recognition, and more.
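All three ko_core_news pipelines load the same way; only the package name (and model size) differs. A minimal sketch with the small pipeline, assuming it has been installed first (e.g. `python -m spacy download ko_core_news_sm`):

```python
# Hedged sketch: full Korean NLP pipeline with spaCy's ko_core_news_sm.
# Swap in ko_core_news_md or ko_core_news_lg for the larger pipelines.
import spacy

nlp = spacy.load("ko_core_news_sm")
doc = nlp("서울은 대한민국의 수도이다.")

for token in doc:
    print(token.text, token.pos_, token.dep_)  # tokens, POS tags, dependencies
for ent in doc.ents:
    print(ent.text, ent.label_)                # named entities
```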
## Ko Sroberta Multitask
jhgan · Text Embedding · Korean · 162.23k downloads · 115 likes

A Korean sentence embedding model built on sentence-transformers that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for tasks such as clustering and semantic search.
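A minimal semantic-search sketch with this model, assuming the Hub ID `jhgan/ko-sroberta-multitask` (inferred from the author and model name above):

```python
# Hedged sketch: semantic search over a tiny Korean corpus.
# Model ID "jhgan/ko-sroberta-multitask" is assumed from the entry above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jhgan/ko-sroberta-multitask")

corpus = ["날씨가 정말 좋네요.", "주식 시장이 급락했다.", "오늘은 하늘이 맑습니다."]
corpus_emb = model.encode(corpus, convert_to_tensor=True)  # 768-dim vectors

query_emb = model.encode("오늘 날씨 어때?", convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)
print(hits[0])  # top matches should be the two weather sentences
```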
## Albert Kor Base
kykim · Large Language Model · Transformers · Korean · 4,633 downloads · 6 likes

An ALBERT base model trained on a 70 GB Korean text dataset with a 42,000-token lower-cased subword vocabulary.
## Roberta Base
klue · Large Language Model · Transformers · Korean · 1.2M downloads · 33 likes

A RoBERTa model pretrained on Korean, suitable for a wide range of Korean natural language processing tasks.
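A minimal masked-token prediction sketch, assuming the Hub ID `klue/roberta-base` and a BERT-style `[MASK]` token (KLUE's RoBERTa models ship with a BERT-style tokenizer):

```python
# Hedged sketch: fill-mask with klue/roberta-base.
from transformers import pipeline

fill = pipeline("fill-mask", model="klue/roberta-base")
for pred in fill("대한민국의 수도는 [MASK] 입니다."):
    print(pred["token_str"], round(pred["score"], 3))
```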
## Kobart
byeongal · MIT · Large Language Model · Transformers · Korean · 17 downloads · 0 likes

A Korean pretrained model based on the BART architecture, suited to text generation and other sequence-to-sequence NLP tasks.
## Stanza Ko
stanfordnlp · Apache-2.0 · Sequence Labeling · Korean · 148 downloads · 5 likes

The Korean model package for Stanza, an efficient multilingual linguistic analysis toolkit that covers everything from raw-text processing to syntactic parsing and named entity recognition.
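A minimal sketch of running the Stanza Korean pipeline (the `ko` language code and the download/Pipeline calls are standard Stanza usage):

```python
# Hedged sketch: Korean analysis with Stanza.
import stanza

stanza.download("ko")        # fetch the Korean models once
nlp = stanza.Pipeline("ko")  # default processors include tokenize, pos, lemma, depparse
doc = nlp("스탠자는 한국어 문장을 분석합니다.")

for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.deprel)
```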
## Klue Roberta Small Nli Sts
ddobokki · Text Embedding · Transformers · Korean · 141 downloads · 4 likes

A Korean sentence-transformer model based on KLUE-RoBERTa-small, designed for sentence-similarity calculation and natural language inference tasks.
## Bert Kor Base
kykim · Large Language Model · Korean · 89.96k downloads · 31 likes

A Korean BERT base model trained on a 70 GB Korean text dataset with a 42,000-token lower-cased subword vocabulary.
## Klue Roberta Base Sae
ehdwns1516 · Large Language Model · Transformers · 26 downloads · 0 likes

A RoBERTa model trained on a Korean dataset for sentence-intent understanding tasks.
## Klue Roberta Small 3i4k Intent Classification
bespin-global · Text Classification · Transformers · Korean · 362 downloads · 11 likes

A Korean intent classification model fine-tuned from KLUE's RoBERTa-small, recognizing seven intent types.
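A minimal classification sketch, assuming the Hub ID `bespin-global/klue-roberta-small-3i4k-intent-classification` (inferred from the author and model name above, not confirmed by the listing):

```python
# Hedged sketch: Korean intent classification.
# The model ID below is an assumption inferred from the entry above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bespin-global/klue-roberta-small-3i4k-intent-classification",
)
print(classifier("창문 좀 닫아 줄래?"))  # expect a command/request-style intent label
```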